On the Performance of Sparse Recovery via ℓ_p-Minimization (0 ≤ p ≤ 1)
Authors
Abstract
It is known that a high-dimensional sparse vector x* in R^n can be recovered from low-dimensional measurements y = Ax*, where A ∈ R^{m×n} (m < n) is the measurement matrix. In this paper, we investigate the recovery ability of ℓ_p-minimization (0 ≤ p ≤ 1) as p varies, where ℓ_p-minimization returns a vector with the least ℓ_p "norm" among all the vectors x satisfying Ax = y. Besides analyzing the performance of strong recovery, where ℓ_p-minimization is required to recover all the sparse vectors up to a certain sparsity, we also for the first time analyze the performance of "weak" recovery of ℓ_p-minimization (0 ≤ p < 1), where the aim is to recover all the sparse vectors on one support with a fixed sign pattern. When α (:= m/n) → 1, we provide sharp thresholds of the sparsity ratio that differentiate success from failure via ℓ_p-minimization. For strong recovery, the threshold strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. Surprisingly, for weak recovery, the threshold is 2/3 for all p in [0, 1), while the threshold is 1 for ℓ_1-minimization. We also explicitly demonstrate that ℓ_p-minimization (p < 1) can return a denser solution than ℓ_1-minimization. For any α < 1, we provide bounds on the sparsity ratio for strong recovery and weak recovery, respectively, below which ℓ_p-minimization succeeds with overwhelming probability. Our bound for strong recovery improves on the existing bounds when α is large. In particular, regarding the recovery threshold, this paper argues that ℓ_p-minimization has a higher threshold with smaller p for strong recovery; that the threshold is the same for all p for sectional recovery; and that ℓ_1-minimization can outperform ℓ_p-minimization for weak recovery. These results are in contrast to the conventional wisdom that ℓ_p-minimization, though computationally more expensive, always has better sparse recovery ability than ℓ_1-minimization since it is closer to ℓ_0-minimization. Finally, we provide an intuitive explanation of our findings. Numerical examples are also used to confirm and illustrate the theoretical predictions.
arXiv:1011.5936v1 [cs.IT] 26 Nov 2010
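To make the two programs above concrete, here is a minimal numerical sketch (not the authors' code; the IRLS update, its smoothing schedule, and the problem sizes are common heuristic choices, not taken from the paper) that solves ℓ_1-minimization exactly as a linear program and approximates ℓ_p-minimization (p < 1) by iteratively reweighted least squares:

```python
# Sketch: recover a sparse x* from y = A x* via l1 (LP) and lp (IRLS).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 80, 10                     # measurements, dimension, sparsity
A = rng.standard_normal((m, n))          # random Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
y = A @ x_true

# l1-minimization: min ||x||_1 s.t. Ax = y, as an LP in the split x = x+ - x-.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_l1 = res.x[:n] - res.x[n:]

# lp-minimization (p < 1) by IRLS: each step solves the weighted
# least-norm problem min sum_i w_i x_i^2 s.t. Ax = y in closed form.
def irls_lp(A, y, p=0.5, iters=50, eps=1.0):
    x = np.linalg.lstsq(A, y, rcond=None)[0]    # minimum-norm start
    for _ in range(iters):
        winv = (x**2 + eps) ** (1 - p / 2)      # 1/w_i, w_i ~ |x_i|^(p-2)
        AWA = A @ (winv[:, None] * A.T)         # A W^-1 A^T
        x = winv * (A.T @ np.linalg.solve(AWA, y))
        eps = max(eps / 10, 1e-10)              # gradually sharpen weights
    return x

x_lp = irls_lp(A, y, p=0.5)
print("l1 recovery error:        ", np.linalg.norm(x_l1 - x_true))
print("lp (p=0.5) recovery error:", np.linalg.norm(x_lp - x_true))
```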
Similar Resources
Stable Recovery of Sparse Signals via ℓ_p-Minimization
In this paper, we show that, under the assumption that ‖e‖_2 ≤ ε, every k-sparse signal x ∈ R^n can be stably (ε ≠ 0) or exactly recovered (ε = 0) from y = Ax + e via ℓ_p-minimization with p ∈ (0, p̄], where ...
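For reference, the noise-aware recovery program this snippet refers to is conventionally written as follows (a standard formulation; the cited paper may state it with different notation):

```latex
\min_{x \in \mathbb{R}^n} \; \|x\|_p^p
\quad \text{subject to} \quad
\|Ax - y\|_2 \le \epsilon .
```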
On the Performance of Sparse Recovery via ℓ_p-Minimization (0 ≤ p ≤ 1)
It is known that a high-dimensional sparse vector x* in R^n can be recovered from low-dimensional measurements y = Ax*, where A ∈ R^{m×n} (m < n) is the measurement matrix. In this paper, with A being a random Gaussian matrix, we investigate the recovery ability of ℓ_p-minimization (0 ≤ p ≤ 1) as p varies, where ℓ_p-minimization returns a vector with the least ℓ_p quasi-norm among all the vectors x satisf...
When Is p Such That ℓ_0-Minimization Equals ℓ_p-Minimization
In this paper, we present an analytic expression for p*(A, b) such that the unique solution to ℓ_0-minimization is also the unique solution to ℓ_p-minimization for any 0 < p < p*(A, b). Furthermore, the main contribution of this paper is not only the analytic expression for such a p*(A, b) but also its proof. Finally, we present the results of two examples to confirm the validity of our conclusion...
Exact Low-Rank Matrix Recovery via Nonconvex M_p-Minimization
Low-rank matrix recovery (LMR) arises in many fields, such as signal and image processing, statistics, computer vision, and system identification and control, and it is NP-hard. It is known that under certain restricted isometry property (RIP) conditions, the exact low-rank matrix solution can be obtained by solving its convex relaxation, nuclear norm minimization. In this paper, we consider the n...
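As a small illustration of the quantities involved (illustrative only, not code from the cited paper), the sketch below computes the Schatten-p quasi-norm of a matrix from its singular values; p = 1 recovers the nuclear norm used in the convex relaxation, and letting p → 0 counts nonzero singular values, i.e., the rank:

```python
import numpy as np

def schatten_p(X, p, tol=1e-12):
    """Schatten-p quasi-norm of X; p = 1 is the nuclear norm,
    p -> 0 degenerates to the rank (count of nonzero singular values)."""
    s = np.linalg.svd(X, compute_uv=False)
    if p == 0:
        return int(np.count_nonzero(s > tol))
    return float((s ** p).sum() ** (1.0 / p))

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 5))  # rank-2 matrix
print(schatten_p(X, 1.0))  # nuclear norm (convex surrogate for rank)
print(schatten_p(X, 0.5))  # nonconvex Schatten-1/2 quasi-norm
print(schatten_p(X, 0))    # rank
```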
Complexity of Unconstrained L_2-L_p Minimization
We consider the unconstrained L_2-L_p minimization problem: find a minimizer of ‖Ax − b‖_2^2 + λ‖x‖_p^p for given A ∈ R^{m×n}, b ∈ R^m and parameters λ > 0, p ∈ [0, 1). This problem has been studied extensively in variable selection and sparse least-squares fitting for high-dimensional data. Theoretical results show that the minimizers of the L_2-L_p problem have various attractive features due to the concavity and non-Lips...
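As a quick sanity check on the objective just described (a toy sketch under the objective form as reconstructed above; the data and parameters are arbitrary), the following evaluates ‖Ax − b‖_2^2 + λ‖x‖_p^p along a one-dimensional slice, where the concave ‖·‖_p^p term makes the landscape nonconvex for p < 1:

```python
import numpy as np

def l2lp_objective(A, b, x, lam, p):
    """Unconstrained L2-Lp objective: ||Ax - b||_2^2 + lam * ||x||_p^p."""
    return float(np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x) ** p))

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
b = rng.standard_normal(10)
lam, p = 0.5, 0.5

# Vary only the first coordinate to see the nonsmooth kink at zero.
for t in np.linspace(-2.0, 2.0, 9):
    x = np.array([t, 0.0, 0.0])
    print(f"t = {t:+.2f}   f(x) = {l2lp_objective(A, b, x, lam, p):.3f}")
```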
Journal: IEEE Trans. Information Theory
Volume: 57, Issue: -
Pages: -
Publication date: 2011